Search for: All records

Creators/Authors contains: "Polinsky, Maria"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract This paper explores the concept of multiple grammars (MGs) and their implications for linguistic theory, language acquisition, and bilingual language knowledge. Drawing on evidence from phenomena such as scope interactions, verb raising, and agreement patterns, I argue that seemingly identical surface structures can be undergirded by different grammatical analyses that may compete within speaker populations. I then propose a typology of MG distributions, including shared MGs, competing MGs, and partial MGs, each with distinct consequences for acquisition and use. Contrary to expectations of simplification, bilingualism can sometimes lead to an expansion of grammatical analyses and does not always lead to the elimination of MGs. The paper discusses methods for predicting environments conducive to MGs, considering factors such as structural ambiguity and silent elements. The examination of MGs compels us to explore how learners navigate underdetermined input, especially in bilingual contexts, and to examine the interplay between gradient acceptability judgments and categorical grammatical distinctions. The study of MGs offers valuable insights into language variation, change, and the nature of linguistic competence.
    Free, publicly accessible full text available July 4, 2026
  2. Abstract This chapter examines the "Silent Problem" in heritage languages (HLs) - the systematic difficulty heritage speakers experience with silent grammatical elements compared to their overt counterparts. Through analysis of null pronouns and gaps in relative clauses, the study reveals consistent patterns where heritage speakers show reduced sensitivity to silent elements in both comprehension and production. The research demonstrates that heritage speakers maintain the basic licensing conditions for null elements but exhibit altered interpretive strategies. They show a stronger preference for subject antecedents in anaphoric dependencies than baseline speakers, following what the author terms the "Position of Antecedent Strategy" (PAS) - consistently choosing the highest structural argument as the antecedent regardless of pronoun type (null or overt). A pilot study on Russian relative clauses using weak crossover (WCO) effects reveals that baseline speakers exhibit bimodal grammar patterns - some using A-bar movement, others using coindexation - while heritage speakers uniformly employ coindexation structures, suggesting a shift from syntactic to anaphoric dependencies. This represents a preference for "Merge over Move" operations in heritage grammars. The chapter identifies several factors contributing to these patterns: (1) reduced perceptual salience of silent elements, (2) heritage speakers' aversion to scalar principles in favor of equipollent oppositions, and (3) difficulty establishing long-distance dependencies. Production data shows heritage speakers often use resumptive pronouns instead of gaps in relative clauses, reflecting the general avoidance of silent elements. The study draws parallels between heritage languages and endangered languages, suggesting these patterns reflect universal consequences of reduced linguistic exposure. The findings contribute to understanding how grammatical representations restructure under conditions of limited input, with implications for theories of bilingual language development and syntactic processing. 
    Free, publicly accessible full text available March 27, 2026
  3. Abstract This paper presents and analyzes antipassive constructions in the Mayan language Kaqchikel. Through various syntactic tests, we show that antipassive constructions differ from both active transitive and Agent Focus structures in that they do not syntactically project a DP-sized object. Thus, we should think of antipassives as a type of unergative. When an object seems to disappear or become less important in an antipassive, this is not a special feature of antipassives – it is simply what happens in any intransitive structure. In other words, the ‘suppression’ or ‘demotion’ of the thematic object is not an inherent characteristic of the construction but rather a byproduct of its intransitive nature. To better understand how transitive and intransitive constructions function cross-linguistically, we propose a novel framework for categorizing the functional heads v and Voice. We show that the external argument behaves differently in transitive versus intransitive clauses, appearing in different structural positions, which is backed up by evidence from causatives in Kaqchikel and scope patterns in other languages. While transitive and passive structures include a Voice projection, Agent Focus and antipassive structures do not. We compare our analysis to previous work on antipassives and explore what our findings might mean for understanding antipassives in other languages.
    Free, publicly accessible full text available March 10, 2026
  4. Polysynthetic languages present a challenge for morphological analysis due to the complexity of their words and the lack of high-quality annotated datasets needed to build and/or evaluate computational models. The contribution of this work is twofold. First, using linguists’ help, we generate and contribute high-quality annotated data for two low-resource polysynthetic languages for two tasks: morphological segmentation and part-of-speech (POS) tagging. Second, we present the results of state-of-the-art unsupervised approaches for these two tasks on Adyghe and Inuktitut. Our findings show that for these polysynthetic languages, using linguistic priors helps the task of morphological segmentation and that using stems rather than words as the core unit of abstraction leads to superior performance on POS tagging. 
  5. Unsupervised cross-lingual projection for part-of-speech (POS) tagging relies on the use of parallel data to project POS tags from a source language for which a POS tagger is available onto a target language across word-level alignments. The projected tags then form the basis for learning a POS model for the target language. However, languages with rich morphology often yield sparse word alignments because words corresponding to the same citation form do not align well. We hypothesize that for morphologically complex languages, it is more efficient to use the stem rather than the word as the core unit of abstraction. Our contributions are: 1) we propose an unsupervised stem-based cross-lingual approach for POS tagging for low-resource languages of rich morphology; 2) we further investigate morpheme-level alignment and projection; and 3) we examine whether the use of linguistic priors for morphological segmentation improves POS tagging. We conduct experiments using six source languages and eight morphologically complex target languages of diverse typologies. Our results show that the stem-based approach improves the POS models for all the target languages, with an average relative error reduction of 10.3% in accuracy per target language, and outperforms the word-based approach that operates on three times more data for about two thirds of the language pairs we consider. Moreover, we show that morpheme-level alignment and projection and the use of linguistic priors for morphological segmentation further improve POS tagging. (A schematic sketch of this projection approach appears after the list.)
  6. With the increasing interest in low-resource languages, unsupervised morphological segmentation has become an active area of research, where approaches based on Adaptor Grammars achieve state-of-the-art results. We demonstrate the power of harnessing linguistic knowledge as priors within Adaptor Grammars in a minimally supervised learning fashion. We introduce two types of priors: 1) grammar definition, where we design language-specific grammars; and 2) linguist-provided affixes, collected by an expert in the language and seeded into the grammars. We use Japanese and Georgian as respective case studies for the two types of priors and introduce new datasets for these languages, with gold morphological segmentation for evaluation. We show that the use of priors results in error reductions of 8.9% and 34.2%, respectively, over the equivalent state-of-the-art unsupervised system. (A schematic sketch of affix seeding appears after the list.)
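The cross-lingual projection approach summarized in the POS-tagging record above can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration of the general idea only: project source-side POS tags across word alignments, aggregate the projected tags over target-side stems rather than full word forms, and tag by majority vote. The toy corpus, the stand-in stemmer, and all function names are assumptions made for the example and are not taken from the papers or their data.

from collections import Counter, defaultdict

# Toy aligned corpus: (source tokens, source POS tags, target tokens,
# word alignments as (source_index, target_index) pairs).
# All of the data below is invented for illustration.
parallel_data = [
    (["the", "houses", "burned"], ["DET", "NOUN", "VERB"],
     ["evler", "yandi"], [(1, 0), (2, 1)]),
    (["a", "house", "burns"], ["DET", "NOUN", "VERB"],
     ["ev", "yaniyor"], [(1, 0), (2, 1)]),
]

def stem(word):
    # Stand-in for a real morphological segmenter: strip a few toy suffixes.
    for suffix in ("ler", "lar", "iyor", "di"):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[:-len(suffix)]
    return word

def project_tags(data, unit=stem):
    # Project source POS tags across alignments and count them per target
    # unit (a stem here; pass unit=str to fall back to full word forms).
    tag_counts = defaultdict(Counter)
    for src_tokens, src_tags, tgt_tokens, alignment in data:
        for s, t in alignment:
            tag_counts[unit(tgt_tokens[t])][src_tags[s]] += 1
    return tag_counts

def tag_sentence(tokens, tag_counts, unit=stem, fallback="NOUN"):
    # Tag each target token by majority vote over its unit's projected tags.
    return [tag_counts[unit(tok)].most_common(1)[0][0]
            if tag_counts[unit(tok)] else fallback
            for tok in tokens]

counts = project_tags(parallel_data)
print(tag_sentence(["ev", "yandi"], counts))  # prints ['NOUN', 'VERB']

Using stems as the unit pools alignment evidence across inflected forms of the same lexeme, which is the intuition behind the gains reported for morphologically rich target languages.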
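The two kinds of priors in the Adaptor Grammars record above (language-specific grammar definitions and linguist-provided affixes) can likewise be sketched. The snippet below only shows, under an assumed rule syntax and with invented affix lists, how an expert affix list might be folded into a grammar definition; seed_affixes and everything in it are hypothetical, and no actual Adaptor Grammar inference is shown.

# Schematic construction of a segmentation grammar that encodes two kinds
# of priors: a language-specific rule skeleton and linguist-provided affixes
# seeded in as ready-made rules. The rule syntax and the affix lists are
# invented for illustration.
SKELETON = [
    "Word -> Prefix Stem Suffix",
    "Word -> Stem Suffix",
    "Word -> Stem",
    "Prefix -> Chars",
    "Stem -> Chars",
    "Suffix -> Chars",
    "Chars -> Char Chars",
    "Chars -> Char",
]

# Hypothetical expert-provided affixes (stand-ins for a linguist's list).
SEED_PREFIXES = ["mi", "mo", "da"]
SEED_SUFFIXES = ["eb", "is", "ma"]

def seed_affixes(skeleton, prefixes, suffixes):
    # Turn each seeded affix into a dedicated rule (characters as terminals),
    # so a learner can reuse it rather than rediscover it from raw data.
    rules = list(skeleton)
    rules += ["Prefix -> " + " ".join(p) for p in prefixes]
    rules += ["Suffix -> " + " ".join(s) for s in suffixes]
    return "\n".join(rules)

print(seed_affixes(SKELETON, SEED_PREFIXES, SEED_SUFFIXES))

Seeding known affixes narrows the space of segmentations the unsupervised learner has to consider, which is, informally, the role the priors play in the reported error reductions.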